92 research outputs found

    Digesting omni-video along routes for navigation

    Get PDF
    poster abstract
    Omni-directional video records complete visual information along a route. Though replaying an omni-video presents reality, it requires a significant amount of memory and communication bandwidth. This work extracts distinct views from an omni-video to form a visual digest, named a route sheet, for navigation. We sort scenes at the motion and visibility level and investigate the similarity and redundancy of scenes in the context of a route. We use source data from a 3D elevation map or omni-videos for the view selection. By condensing the flow in the video, our algorithm can generate distinct omni-view sequences with visual information as rich as the omni-video, for further scene indexing and navigation with GIS data.
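    As a rough, hedged illustration of the flow-condensation idea, the sketch below selects distinct views by accumulating the mean optical-flow magnitude between consecutive frames and sampling a key frame whenever accumulated motion exceeds a budget. The Farneback flow estimator and the motion_budget threshold are assumptions for illustration, not the method described in the abstract.

    ```python
    # A minimal sketch, not the paper's algorithm: pick "distinct" views from
    # a video by accumulating inter-frame motion and sampling a key frame
    # each time the camera has moved enough along the route.
    import cv2
    import numpy as np

    def select_key_views(video_path, motion_budget=40.0):
        """Return frame indices sampled whenever accumulated mean optical-flow
        magnitude exceeds motion_budget (an arbitrary, data-dependent choice)."""
        cap = cv2.VideoCapture(video_path)
        ok, frame = cap.read()
        if not ok:
            return []
        prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        keys, accumulated, idx = [0], 0.0, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            idx += 1
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            accumulated += np.linalg.norm(flow, axis=2).mean()
            if accumulated >= motion_budget:
                keys.append(idx)      # scene changed enough: keep this view
                accumulated = 0.0
            prev = gray
        cap.release()
        return keys
    ```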

    Smart Video Systems in Police Cars

    Get PDF
    poster abstract
    The use of video cameras in police cars has been found to have significant value, and the number of installed systems is increasing. In addition to recording routine traffic stops for later use in legal settings, in-car video can be analyzed in real time or near real time to detect critical events and notify police headquarters for help. This poster presents methods for detecting critical events in such police-car videos. The specific critical events are a person running out of a stopped car and an officer falling down while approaching a stopped car. In both situations, the aim is to alert the control center immediately for backup forces, especially in the latter case, when the officer may be incapacitated. To achieve real-time video processing so that a quick response can be generated without employing complex, slow, and brittle video-processing algorithms, we use a reduced spatiotemporal representation (a 1D projection profile) and hidden Markov models to detect these events. The methods are tested on many video shots under various environmental and illumination conditions.
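    A minimal sketch of this kind of pipeline, assuming a simple frame-difference foreground and the hmmlearn library for the HMMs (the poster's exact features and model structure are not reproduced here): reduce each frame to a 1D projection profile, then score profile sequences with one HMM per event class.

    ```python
    # Hedged sketch, not the authors' implementation: reduce each frame to a
    # 1D horizontal projection profile of moving pixels, then score the
    # profile sequence with per-event Gaussian HMMs. Model sizes and the
    # motion threshold are illustrative assumptions.
    import numpy as np
    from hmmlearn.hmm import GaussianHMM   # pip install hmmlearn

    def projection_profiles(frames, motion_thresh=25):
        """frames: list of grayscale images -> (T-1, W) array of column sums
        of the thresholded inter-frame difference (the reduced 1D form)."""
        profiles = []
        for prev, cur in zip(frames, frames[1:]):
            moving = np.abs(cur.astype(int) - prev.astype(int)) > motion_thresh
            profiles.append(moving.sum(axis=0).astype(float))  # one row per frame
        return np.asarray(profiles)

    def train_event_model(sequences, n_states=4):
        """Fit one HMM for an event class from example profile sequences."""
        X = np.concatenate(sequences)
        lengths = [len(s) for s in sequences]
        return GaussianHMM(n_components=n_states, covariance_type="diag",
                           n_iter=50).fit(X, lengths)

    def classify(profile_seq, models):
        """Pick the event (dict key) whose HMM gives the highest likelihood."""
        return max(models, key=lambda name: models[name].score(profile_seq))
    ```

    In use, one model would be trained per critical event (e.g., "person running from car", "officer falling"), and an incoming profile sequence triggers an alert when its best-scoring model is one of the critical classes.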

    Video anatomy: spatial-temporal video profile

    Get PDF
    Indiana University-Purdue University Indianapolis (IUPUI)
    A massive number of videos are uploaded to video websites, so smooth video browsing, editing, retrieval, and summarization are in demand. Most videos employ several types of camera operations for expanding the field of view, emphasizing events, and expressing cinematic effects. To digest the heterogeneous videos on video websites and in databases, video clips are profiled into a 2D image scroll containing both spatial and temporal information for video preview. The video profile is visually continuous, compact, scalable, and indexed to each frame. This work analyzes camera kinematics, including zoom, translation, and rotation, and categorizes camera actions as their combinations. An automatic video summarization framework is proposed and developed. After conventional video clip segmentation, and further segmentation by smooth camera operation, the global flow field under all camera actions is investigated for profiling various types of video. A new algorithm is designed to extract the major flow direction and a convergence factor using condensed images. This work then proposes a uniform scheme to segment video clips and sections, sample the video volume across the major flow, and compute the flow convergence factor, in order to obtain an intrinsic scene space less influenced by camera ego-motion. A motion-blur technique is also used to render dynamic targets in the profile. The resulting video profile can be displayed in a video track to guide access to individual frames, assist video editing, and facilitate applications such as surveillance, visual archiving of environments, video retrieval, and online video preview.
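    The sketch below shows, under assumed choices (dense Farneback flow, the angle of the mean displacement vector, and mean divergence of the field), how a major flow direction and a convergence factor might be computed per frame pair; it illustrates the idea only and is not the algorithm developed in this work.

    ```python
    # Hedged sketch: estimate a major flow direction and a convergence factor
    # from dense optical flow between two grayscale frames.
    import cv2
    import numpy as np

    def flow_statistics(prev_gray, cur_gray):
        flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        u, v = flow[..., 0], flow[..., 1]
        # Major flow direction: angle of the mean displacement vector.
        direction = np.arctan2(v.mean(), u.mean())
        # Convergence factor: negated mean divergence du/dx + dv/dy, so it is
        # positive when flow vectors point inward (converging field).
        divergence = np.gradient(u, axis=1) + np.gradient(v, axis=0)
        convergence = -divergence.mean()
        return direction, convergence
    ```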

    Simultaneous evaluation of treatment efficacy and toxicity for bispecific T-cell engager therapeutics in a humanized mouse model.

    Get PDF
    Immuno-oncology (IO)-based therapies such as checkpoint inhibitors, bi-specific antibodies, and CAR-T-cell therapies have shown significant success in the treatment of several cancer indications. However, these therapies can result in the development of severe adverse events, including cytokine release syndrome (CRS). Currently, there is a paucity of in vivo models that can evaluate dose-response relationships for both tumor control and CRS-related safety issues. We tested an in vivo PBMC humanized mouse model to assess both treatment efficacy against specific tumors and the concurrent cytokine release profiles for individual human donors after treatment with a CD19xCD3 bispecific T-cell engager (BiTE). Using this model, we evaluated tumor burden, T-cell activation, and cytokine release in response to bispecific T-cell-engaging antibody in humanized mice generated with different PBMC donors. The results show that PBMC-engrafted NOD-scid Il2rgnull mice lacking expression of mouse MHC class I and II (NSG-MHC-DKO mice) and implanted with a tumor xenograft predict both efficacy for tumor control by CD19xCD3 BiTE and stimulated cytokine release. Moreover, our findings indicate that this PBMC-engrafted model captures variability among donors for tumor control and cytokine release following treatment. Tumor control and cytokine release were reproducible for the same PBMC donor in separate experiments. The PBMC humanized mouse model described here is a sensitive and reproducible platform that identifies specific patient/cancer/therapy combinations for treatment efficacy and development of complications.

    Finishing the euchromatic sequence of the human genome

    Get PDF
    The sequence of the human genome encodes the genetic instructions for human physiology, as well as rich information about human evolution. In 2001, the International Human Genome Sequencing Consortium reported a draft sequence of the euchromatic portion of the human genome. Since then, the international collaboration has worked to convert this draft into a genome sequence with high accuracy and nearly complete coverage. Here, we report the result of this finishing process. The current genome sequence (Build 35) contains 2.85 billion nucleotides interrupted by only 341 gaps. It covers ∼99% of the euchromatic genome and is accurate to an error rate of ∼1 event per 100,000 bases. Many of the remaining euchromatic gaps are associated with segmental duplications and will require focused work with new methods. The near-complete sequence, the first for a vertebrate, greatly improves the precision of biological analyses of the human genome, including studies of gene number, birth, and death. Notably, the human genome seems to encode only 20,000-25,000 protein-coding genes. The genome sequence reported here should serve as a firm foundation for biomedical research in the decades ahead.

    Locating key views for image indexing of spaces

    No full text
    Images are a dominant medium, among video, 3D models, and other media, for visualizing an environment and creating virtual access on the Internet. The location of image capture, however, is subjective and has until now relied on the aesthetic sense of photographers. In this paper, we not only visualize areas with images but also propose a general framework to determine where the most distinct viewpoints should be located. Starting from elevation data, we represent the spatial and content information in ground-based images such that (1) a given number of images can achieve maximum coverage of informative scenes, and (2) a set of key views can be selected with a certain continuity to represent the most distinct views. Based on scene visibility, continuity, and data redundancy, we evaluate viewpoints numerically with an object-emitting illumination model. Our key-view exploration may eventually reduce the visual data to transmit, facilitate image acquisition, indexing, and interaction, and enhance the perception of spaces. Real sample images are captured at the planned positions to form a visual network that indexes the area.
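    As a hedged sketch of the coverage objective only (not the object-emitting illumination model), the greedy selection below picks k key viewpoints from a precomputed visibility matrix so that each new view covers the largest number of not-yet-seen scene cells; the matrix itself would come from the elevation data.

    ```python
    # Minimal sketch, assuming a precomputed boolean visibility matrix
    # (candidate viewpoint x scene cell): greedy maximum-coverage selection.
    import numpy as np

    def greedy_key_views(visible, k):
        """visible: (n_viewpoints, n_cells) bool array; returns up to k indices."""
        covered = np.zeros(visible.shape[1], dtype=bool)
        chosen = []
        for _ in range(k):
            gains = (visible & ~covered).sum(axis=1)  # new cells each view adds
            best = int(np.argmax(gains))
            if gains[best] == 0:
                break               # everything visible is already covered
            chosen.append(best)
            covered |= visible[best]
        return chosen
    ```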

    Key Views for Visualizing Large Spaces

    No full text
    Indiana University-Purdue University Indianapolis (IUPUI)
    Images are a dominant medium, among video, 3D models, and other media, for visualizing an environment and creating virtual access on the Internet. The location of image capture, however, is subjective and has until now relied on the aesthetic sense of photographers. In this paper, we not only visualize areas with images but also propose a general framework to determine where the most distinct viewpoints should be located. Starting from elevation data, we represent the spatial and content information in ground-based images such that (1) a given number of images can achieve maximum coverage of informative scenes, and (2) a set of key views can be selected with a certain continuity to represent the most distinct views. Based on scene visibility, continuity, and data redundancy, we evaluate viewpoints numerically with an object-emitting illumination model. Our key-view exploration may eventually reduce the visual data to transmit, facilitate image acquisition, indexing, and interaction, and enhance the perception of spaces. Real sample images are captured at the planned positions to form a visual network that indexes the area.